Maximum Entropy Probabilistic Logic
Abstract
Recent research has shown that there are two types of uncertainty that can be expressed in first-order logic (propositional and statistical uncertainty) and that both types can be represented in terms of probability spaces. However, these efforts have fallen short of providing a general account of how to design probability measures for these spaces; as a result, we lack a crucial component of any system that reasons under these types of uncertainty. In this paper, we describe an automatic procedure for defining such measures in terms of a probabilistic knowledge base. In particular, we employ the principle of maximum entropy to select measures that are consistent with our knowledge and that make the fewest assumptions in doing so. This approach yields models of first-order uncertainty that are principled, intuitive, and economical in their representation.

Introduction

Integrating representations of uncertainty with the expressive semantics of first-order logic is the theme of much research in Artificial Intelligence. Recent work has shown that there are two types of uncertainty that can be expressed in first-order logic: propositional uncertainty, where we are uncertain of the truth of logical sentences, and statistical uncertainty, where we are uncertain of the distribution of properties across objects (Bacchus 1990). This work also shows that both types of uncertainty can be represented in terms of probability spaces (Halpern 1990). However, these efforts have fallen short of providing a general account of how to design and represent probability measures for these spaces; as a result, we lack a crucial component of any system that reasons under propositional and statistical uncertainty.

In this paper, we describe an automatic procedure for defining these measures in terms of a probabilistic knowledge base that contains certain and uncertain first-order knowledge. In general, our knowledge will be insufficient to determine the measures uniquely, and so we adopt the following strategy: we view the probabilistic knowledge base as a set of constraints, and of the measures that satisfy the constraints, we choose the one with maximum entropy (Jaynes 1979). We show that this choice leads to models of propositional and statistical uncertainty that are principled, intuitive, and economical in their representation.

We begin by reviewing the basic concepts underlying propositional uncertainty and then discuss its connection to the principle of maximum entropy. Next, we show how recent algorithmic advances, which provide general-purpose machinery to implement the maximum entropy principle, may be applied to yield principled and compact representations of propositional uncertainty. We then extend the approach to include statistical uncertainty, and show how uncertain knowledge of each type may be used to inform inferences of the other. We conclude with a discussion of some important issues and a summary of related work.

Degrees of Belief and Random Worlds

Nilsson (1986) was among the first to consider the problem of representing propositional uncertainty, i.e., uncertainty regarding the truth of logical sentences. For example, an agent may be unsure of the truth of such sentences as $\mathit{flies}(\mathrm{Tweety})$ or $\forall x\,(\mathit{bird}(x) \rightarrow \mathit{flies}(x))$, and may ascribe a degree of belief (or probability of truth) to each. In this section, we describe one approach, which has become known as the random worlds formulation.
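To make the formulation concrete before the formal development, the following sketch (illustrative only; the language, domain, and numbers are not from the paper) enumerates every possible world for a tiny language with a single unary relation flies over a two-object domain, places a uniform world model over those worlds, and reads off degrees of belief as the total probability of the worlds where a sentence holds, anticipating the definitions below.

```python
from itertools import product

# Hypothetical tiny language (not from the paper): one unary relation
# flies over the domain {Tweety, Opus}.
DOMAIN = ["Tweety", "Opus"]

# A possible world fixes the extension of flies, i.e., which objects fly.
# With two objects there are 2**2 = 4 possible worlds.
worlds = [dict(zip(DOMAIN, truth))
          for truth in product([True, False], repeat=len(DOMAIN))]

# A world model assigns a probability to each possible world
# (here, the uniform model).
mu = [1.0 / len(worlds)] * len(worlds)

def degree_of_belief(sentence_holds):
    """Sentence probability: total measure of the worlds where it holds."""
    return sum(p for p, w in zip(mu, worlds) if sentence_holds(w))

# Pr(flies(Tweety)): true in 2 of the 4 equally likely worlds -> 0.5
print(degree_of_belief(lambda w: w["Tweety"]))
# Pr(forall x. flies(x)): true in 1 of the 4 worlds -> 0.25
print(degree_of_belief(lambda w: all(w.values())))
```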
The truth of logical sentences is defined in terms of possible worlds. Let $\mathcal{L}$ be a finite first-order logic language (i.e., a collection of finitely many relation, function, and constant symbols, along with the usual variable symbols, connectives, quantifiers, and the equality symbol); let $\mathcal{S}$ be the set of sentences of $\mathcal{L}$ (while we focus on first-order logic languages, the framework trivially admits propositional logic languages as a special case). A possible world (or structure) $\omega$ for $\mathcal{L}$ consists of: a set of objects (called the domain of $\omega$); a set of relations over the domain, each corresponding to a relation symbol in $\mathcal{L}$; and a set of functions over the domain, each corresponding to a function symbol in $\mathcal{L}$. (As usual, constant symbols can be treated as function symbols of zero arity.) The universe of $\mathcal{L}$, denoted $\mathcal{W}$, is the set of all possible worlds for $\mathcal{L}$.

We can represent the semantics of first-order logic by a deterministic valuation function $V : \mathcal{S} \times \mathcal{W} \rightarrow \{\mathrm{T}, \mathrm{F}\}$: a sentence is either true (T) or false (F) in each possible world. Thus, if our agent knew which of the possible worlds is the actual world, then it could apply the valuation function to infer the truth or falsehood of every sentence with certainty. We can therefore interpret its uncertainty regarding the truth of sentences as derivative of an underlying uncertainty regarding which of the possible worlds is the actual world.

The probabilistic way to model this uncertainty is with a random world $W$, i.e., a random variable that ranges over the possible worlds in $\mathcal{W}$; $W$ is governed by a distribution (or measure) $\mu$, called a world model. An agent's world model expresses its degree of belief that any possible world is the actual world, and can be used to compute the degree of belief (sentence probability) of a sentence $\varphi$ as

$$\Pr(\varphi) = \mu(\mathcal{T}(\varphi)) \qquad (1)$$

where $\mathcal{T}(\varphi) = \{\omega \in \mathcal{W} : V(\varphi, \omega) = \mathrm{T}\}$ is the set of models of $\varphi$. This notation generalizes naturally to sets of sentences, allowing us to express conditional degrees of belief.

Several intuitive properties follow immediately from the random worlds formulation. For example, for all world models $\mu$: if $\varphi \models \psi$, then $\Pr(\varphi) \le \Pr(\psi)$; if $\varphi$ is valid, then $\Pr(\varphi) = 1$; if $\varphi$ is unsatisfiable, then $\Pr(\varphi) = 0$; $\Pr(\varphi) + \Pr(\neg\varphi) = 1$ for all $\varphi$; and if $\varphi$ and $\psi$ are mutually exclusive, then $\Pr(\varphi \vee \psi) = \Pr(\varphi) + \Pr(\psi)$.
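Returning to the strategy from the introduction: among all world models consistent with a probabilistic knowledge base, we select the one with maximum entropy. The following sketch is a hypothetical illustration, not the paper's algorithm: the constraint Pr(flies(Tweety) | bird(Tweety)) = 0.9 and the use of SciPy's general-purpose solver are my own choices, made only to show the selection numerically over four possible worlds.

```python
import numpy as np
from scipy.optimize import minimize

# Four possible worlds over the atoms bird(Tweety) and flies(Tweety):
# w0: bird & flies, w1: bird & ~flies, w2: ~bird & flies, w3: neither.
BIRD = np.array([1.0, 1.0, 0.0, 0.0])
BIRD_AND_FLIES = np.array([1.0, 0.0, 0.0, 0.0])

def neg_entropy(p):
    # We minimize the negated entropy -H(p); the clip avoids log(0).
    p = np.clip(p, 1e-12, 1.0)
    return float(np.sum(p * np.log(p)))

constraints = [
    # The world model must sum to one.
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},
    # Knowledge base constraint (illustrative): Pr(flies | bird) = 0.9,
    # written linearly as Pr(flies & bird) - 0.9 * Pr(bird) = 0.
    {"type": "eq", "fun": lambda p: p @ BIRD_AND_FLIES - 0.9 * (p @ BIRD)},
]

result = minimize(neg_entropy, x0=np.full(4, 0.25), method="SLSQP",
                  bounds=[(0.0, 1.0)] * 4, constraints=constraints)
mu = result.x
print(mu.round(3))                           # approx. [0.368 0.041 0.296 0.296]
print((mu @ BIRD_AND_FLIES) / (mu @ BIRD))   # approx. 0.9, as constrained
```

Note how the maximum entropy model leaves the two unconstrained worlds at equal probability and adjusts the constrained ones only as much as the knowledge requires; this is the sense in which it makes the fewest assumptions consistent with the knowledge base.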